Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [1]:
import pickle

training_file = 'train.p'
validation_file = 'valid.p'
testing_file = 'test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train_org, y_train_org = train['features'], train['labels']
X_valid_org, y_valid_org = valid['features'], valid['labels']
X_test_org , y_test_org  = test['features'],  test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas

In [2]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results

# Number of training examples
n_train = X_train_org.shape[0]

# Number of testing examples.
n_test = X_test_org.shape[0]

# What's the shape of an traffic sign image?
image_shape = X_train_org.shape[1:3]

# How many unique classes/labels there are in the dataset.
n_classes = max(y_train_org)+1

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32)
Number of classes = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [3]:
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline

import pandas as pd
df = pd.read_csv('./signnames.csv')
df
Out[3]:
ClassId SignName
0 0 Speed limit (20km/h)
1 1 Speed limit (30km/h)
2 2 Speed limit (50km/h)
3 3 Speed limit (60km/h)
4 4 Speed limit (70km/h)
5 5 Speed limit (80km/h)
6 6 End of speed limit (80km/h)
7 7 Speed limit (100km/h)
8 8 Speed limit (120km/h)
9 9 No passing
10 10 No passing for vehicles over 3.5 metric tons
11 11 Right-of-way at the next intersection
12 12 Priority road
13 13 Yield
14 14 Stop
15 15 No vehicles
16 16 Vehicles over 3.5 metric tons prohibited
17 17 No entry
18 18 General caution
19 19 Dangerous curve to the left
20 20 Dangerous curve to the right
21 21 Double curve
22 22 Bumpy road
23 23 Slippery road
24 24 Road narrows on the right
25 25 Road work
26 26 Traffic signals
27 27 Pedestrians
28 28 Children crossing
29 29 Bicycles crossing
30 30 Beware of ice/snow
31 31 Wild animals crossing
32 32 End of all speed and passing limits
33 33 Turn right ahead
34 34 Turn left ahead
35 35 Ahead only
36 36 Go straight or right
37 37 Go straight or left
38 38 Keep right
39 39 Keep left
40 40 Roundabout mandatory
41 41 End of no passing
42 42 End of no passing by vehicles over 3.5 metric ...
In [30]:
def visualise_dataset(X_train,y_train):
    for i in range(0,X_train.shape[0],int(X_train.shape[0]/400)):  
        image =  X_train[i,:,:,:]
        fig, ax = plt.subplots()
        ax.imshow(image)
        plt.show()
        sign_id = y_train[i]
        sign_name = df.values[int(sign_id),-1]
        print('{1}, ID: {0}'.format(sign_id,sign_name))
    
visualise_dataset(X_train_org,y_train_org)       
End of no passing, ID: 41
End of no passing, ID: 41
End of no passing, ID: 41
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Go straight or right, ID: 36
Go straight or right, ID: 36
Go straight or right, ID: 36
Go straight or right, ID: 36
Traffic signals, ID: 26
Traffic signals, ID: 26
Traffic signals, ID: 26
Traffic signals, ID: 26
Traffic signals, ID: 26
Traffic signals, ID: 26
Slippery road, ID: 23
Slippery road, ID: 23
Slippery road, ID: 23
Slippery road, ID: 23
Slippery road, ID: 23
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Roundabout mandatory, ID: 40
Roundabout mandatory, ID: 40
Roundabout mandatory, ID: 40
Roundabout mandatory, ID: 40
Bumpy road, ID: 22
Bumpy road, ID: 22
Bumpy road, ID: 22
Bumpy road, ID: 22
Go straight or left, ID: 37
Go straight or left, ID: 37
Vehicles over 3.5 metric tons prohibited, ID: 16
Vehicles over 3.5 metric tons prohibited, ID: 16
Vehicles over 3.5 metric tons prohibited, ID: 16
Vehicles over 3.5 metric tons prohibited, ID: 16
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Speed limit (60km/h), ID: 3
Dangerous curve to the left, ID: 19
Dangerous curve to the left, ID: 19
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
Right-of-way at the next intersection, ID: 11
End of no passing by vehicles over 3.5 metric tons, ID: 42
End of no passing by vehicles over 3.5 metric tons, ID: 42
Speed limit (20km/h), ID: 0
Speed limit (20km/h), ID: 0
End of all speed and passing limits, ID: 32
End of all speed and passing limits, ID: 32
End of all speed and passing limits, ID: 32
Pedestrians, ID: 27
Pedestrians, ID: 27
Bicycles crossing, ID: 29
Bicycles crossing, ID: 29
Bicycles crossing, ID: 29
Road narrows on the right, ID: 24
Road narrows on the right, ID: 24
Road narrows on the right, ID: 24
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
No passing, ID: 9
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Speed limit (80km/h), ID: 5
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Keep right, ID: 38
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
Speed limit (120km/h), ID: 8
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
No passing for vehicles over 3.5 metric tons, ID: 10
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Ahead only, ID: 35
Turn left ahead, ID: 34
Turn left ahead, ID: 34
Turn left ahead, ID: 34
Turn left ahead, ID: 34
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
General caution, ID: 18
End of speed limit (80km/h), ID: 6
End of speed limit (80km/h), ID: 6
End of speed limit (80km/h), ID: 6
End of speed limit (80km/h), ID: 6
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Yield, ID: 13
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Speed limit (100km/h), ID: 7
Beware of ice/snow, ID: 30
Beware of ice/snow, ID: 30
Beware of ice/snow, ID: 30
Beware of ice/snow, ID: 30
Beware of ice/snow, ID: 30
Keep left, ID: 39
Keep left, ID: 39
Keep left, ID: 39
Double curve, ID: 21
Double curve, ID: 21
Double curve, ID: 21
Dangerous curve to the right, ID: 20
Dangerous curve to the right, ID: 20
Dangerous curve to the right, ID: 20
Dangerous curve to the right, ID: 20
Turn right ahead, ID: 33
Turn right ahead, ID: 33
Turn right ahead, ID: 33
Turn right ahead, ID: 33
Turn right ahead, ID: 33
Turn right ahead, ID: 33
Turn right ahead, ID: 33
Children crossing, ID: 28
Children crossing, ID: 28
Children crossing, ID: 28
Children crossing, ID: 28
Children crossing, ID: 28
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Priority road, ID: 12
Stop, ID: 14
Stop, ID: 14
Stop, ID: 14
Stop, ID: 14
Stop, ID: 14
Stop, ID: 14
Stop, ID: 14
Stop, ID: 14
No vehicles, ID: 15
No vehicles, ID: 15
No vehicles, ID: 15
No vehicles, ID: 15
No vehicles, ID: 15
No vehicles, ID: 15
No vehicles, ID: 15
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
No entry, ID: 17
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Speed limit (50km/h), ID: 2
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
Road work, ID: 25
In [5]:
y_train_org[range(0,34799,400)]
Out[5]:
array([41, 31, 31, 36, 26, 23,  1,  1,  1,  1,  1, 40, 22, 16,  3,  3,  3,
       19,  4,  4,  4,  4, 11, 11, 11,  0, 27, 24,  9,  9,  9,  5,  5,  5,
        5,  5, 38, 38, 38, 38,  8,  8,  8, 10, 10, 10, 10, 10, 35, 35, 35,
       18, 18, 18,  6, 13, 13, 13, 13, 13,  7,  7,  7, 30, 39, 20, 33, 33,
       28, 12, 12, 12, 12, 12, 14, 15, 15, 17, 17,  2,  2,  2,  2,  2, 25,
       25, 25], dtype=uint8)

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture (is the network over or underfitting?)
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

Pre-process the Data Set (normalization, grayscale, etc.)

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.

In [6]:
print(X_train_org.shape)
print(X_valid_org.shape)
print(X_test_org.shape)
print(X_train_org[0,0,0,:])
(34799, 32, 32, 3)
(4410, 32, 32, 3)
(12630, 32, 32, 3)
[28 25 24]
In [7]:
import cv2
import numpy as np

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """

    np.set_printoptions(threshold=np.nan)
    from sklearn.preprocessing import LabelBinarizer
    
    # Turn labels into numbers and apply One-Hot Encoding
    encoder = LabelBinarizer()
    encoder.fit(range(43))
    
    x = encoder.transform(x)
    # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
    x = x.astype(np.float32)
    
    return x

y_train = one_hot_encode(y_train_org)
y_valid = one_hot_encode(y_valid_org)
y_test  = one_hot_encode(y_test_org)

def conv_to_concat_colorspace(image_batch):
    shape = np.asarray(image_batch.shape)
    shape[-1] = 4
    concat_batch = np.empty(shape)
    for idx in range(image_batch.shape[0]):
        concat_batch[idx,:,:,0:3] = cv2.cvtColor(image_batch[idx], cv2.COLOR_RGB2YCrCb )
        #concat_batch[idx,:,:,3:6] = cv2.cvtColor(image_batch[idx], cv2.COLOR_RGB2YUV )
        #concat_batch[idx,:,:,3] = cv2.cvtColor(image_batch[idx], cv2.COLOR_RGB2HLS)[:,:,2]
        concat_batch[idx,:,:,3] = cv2.cvtColor(image_batch[idx], cv2.COLOR_RGB2HSV)[:,:,1]
        #concat_batch[idx,:,:,5:8] = image_batch[idx]
        concat_batch[idx,:,:,:]   =  concat_batch[idx,:,:,:]/255
    return concat_batch

use_YCrCB_S = False
if use_YCrCB_S:
    %time X_train = conv_to_concat_colorspace(X_train_org)
    %time X_valid = conv_to_concat_colorspace(X_valid_org)
    %time X_test  = conv_to_concat_colorspace(X_test_org)
else:
    X_train = np.sum(X_train_org,axis=3, keepdims=True)/3/255
    X_valid = np.sum(X_valid_org,axis=3, keepdims=True)/3/255
    X_test  = np.sum(X_test_org,axis=3, keepdims=True)/3/255
In [40]:
def visualise_dataset(image,y_valid_org,step_size):
    plt_num = 1
    print(image.shape)
    
    for image_idx in range(0,image.shape[0],step_size):
        channels = image.shape[3]
        plt.figure(plt_num, figsize=(32,32))
        for channel in range(channels):
            plt.subplot(4,8, channel+1) # sets the number of feature maps to show on each row and column
            plt.title('channel ' + str(channel)) # displays the feature map number
            plt.imshow(image[image_idx,:,:, channel], interpolation="nearest", cmap="gray")
        plt.show()

        sign_id = y_train_org[image_idx]
        sign_name = df.values[int(sign_id),-1]
        print('{1}, ID: {0}'.format(sign_id,sign_name))
        
%time visualise_dataset(X_valid[:,:,:,:],y_valid_org,int(X_valid.shape[0]/15))       
(4410, 32, 32, 1)
End of no passing, ID: 41
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Wild animals crossing, ID: 31
Go straight or right, ID: 36
Traffic signals, ID: 26
Traffic signals, ID: 26
Slippery road, ID: 23
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Wall time: 13.9 s
In [9]:
if False:
    with open('X_train.pickle', 'rb') as handle:
        X_train = pickle.load(handle, protocol=pickle.HIGHEST_PROTOCOL)
    with open('X_valid.pickle', 'rb') as handle:
        X_valid = pickle.load(handle, protocol=pickle.HIGHEST_PROTOCOL)
    with open('X_test.pickle', 'rb') as handle:
        X_test = pickle.load(handle, protocol=pickle.HIGHEST_PROTOCOL)
        
if False:
    with open('X_train.pickle', 'wb') as handle:
        pickle.dump(X_train, handle, protocol=pickle.HIGHEST_PROTOCOL)
    with open('X_valid.pickle', 'wb') as handle:
        pickle.dump(X_valid, handle, protocol=pickle.HIGHEST_PROTOCOL)
    with open('X_test.pickle', 'wb') as handle:
        pickle.dump(X_test, handle, protocol=pickle.HIGHEST_PROTOCOL)
In [10]:
print(X_train.shape)
print(X_valid.shape)
print(X_test.shape)
print(X_train[0,0,0,:])
(34799, 32, 32, 1)
(4410, 32, 32, 1)
(12630, 32, 32, 1)
[ 0.10065359]

Model Architecture

In [11]:
import tensorflow as tf

def conv2d(x_tensor, conv_num_outputs, conv_ksize, conv_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernal size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernal size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """    
    
    # Create the weight and bias
    biases  = tf.Variable(tf.zeros(conv_num_outputs))
    # TF 1.0 :
    #weights_depth = x_tensor.shape.as_list()[-1]
    # TF 0.12 :
    weights_depth = x_tensor.get_shape().as_list()[-1]
    weights_dim = [conv_ksize[0], conv_ksize[1], weights_depth, conv_num_outputs]
    weights = tf.Variable(tf.truncated_normal(weights_dim))
    
    # Apply Convolution
    conv_strides = [1, conv_strides[0], conv_strides[1], 1] # (batch, height, width, depth)
    padding = 'VALID'
    conv_layer = tf.nn.conv2d(x_tensor, weights, conv_strides, padding)

    # Add bias
    conv_layer = tf.nn.bias_add(conv_layer, biases)
    
    # Apply activation function
    conv_layer = tf.nn.relu(conv_layer)
    
    return conv_layer 

    
def pool(conv_layer, pool_ksize, pool_strides):

    filter_shape = [1, pool_ksize[0], pool_ksize[1], 1]
    pool_strides = [1, pool_strides[0], pool_strides[1], 1]
    padding = 'VALID'
    
    pool_max = tf.nn.max_pool(conv_layer, filter_shape, pool_strides, padding)
    #pool_avg = tf.nn.avg_pool(conv_layer, filter_shape, pool_strides, padding)
    
    #pool_max = tf.nn.fractional_max_pool(conv_layer,  [1.0, 2.0, 2.0, 1.0])
    #pool_avg = tf.nn.fractional_avg_pool(conv_layer,  [1.0, 2.0, 2.0, 1.0])
    
    #pool = tf.concat(3,[pool_max,pool_avg])

    return pool_max 


def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    dimensions = (x_tensor.get_shape().as_list()[1:4])
    
    prod = 1
    for dimension in dimensions:
        prod *= dimension
    
    x_tensor = tf.reshape(x_tensor, [-1,prod])
    return x_tensor


def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    tensor_out = tf.contrib.layers.fully_connected(x_tensor, num_outputs)
    return tensor_out


def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    
    x_tensor = x         #:param x_tensor: TensorFlow Tensor
    conv_strides = (1,1) #:param conv_strides: Stride 2-D Tuple for convolution
    pool_ksize = (2,2)   #:param pool_ksize: kernal size 2-D Tuple for pool
    pool_strides = (1,1) #:param pool_strides: Stride 2-D Tuple for pool
    conv_ksize = (3,3)
    
    conv_num_outputs = 5 #:param conv_num_outputs: Number of outputs for the convolutional layer   
    x_tensor = conv2d(x_tensor, conv_num_outputs, conv_ksize, conv_strides)
    conv_num_outputs = 7 #:param conv_num_outputs: Number of outputs for the convolutional layer   
    x_tensor = conv2d(x_tensor, conv_num_outputs, conv_ksize, conv_strides)
    #x_tensor = pool(x_tensor, pool_ksize, pool_strides)

    #conv_num_outputs = 17 #:param conv_num_outputs: Number of outputs for the convolutional layer   
    #x_tensor = conv2d(x_tensor, conv_num_outputs, conv_ksize, conv_strides)
    #x_tensor = conv2d(x_tensor, conv_num_outputs, conv_ksize, conv_strides)
    #x_tensor = pool(x_tensor, pool_ksize, pool_strides)
    

    return x_tensor

    
def fully_con_net(x_tensor, keep_prob):

    x_tensor = flatten(x_tensor)

    num_outputs = n_classes*4
    x_tensor = fully_conn(x_tensor, int(num_outputs))
    x_tensor = tf.nn.dropout(x_tensor, keep_prob)
    num_outputs = n_classes*3
    x_tensor = fully_conn(x_tensor, int(num_outputs))
    x_tensor = tf.nn.dropout(x_tensor, keep_prob)
    num_outputs = n_classes*2
    x_tensor = tf.nn.dropout(x_tensor, keep_prob)
    x_tensor = fully_conn(x_tensor, int(num_outputs))
    num_outputs = n_classes
    x_tensor = fully_conn(x_tensor, int(num_outputs))
    
    return x_tensor

def vgg_like(x_pre):
    # conv + pool
    conv_ksize = (3,3)
    conv_strides = (1,1) #:param conv_strides: Stride 2-D Tuple for convolution
    pool_ksize = (2,2)   #:param pool_ksize: kernal size 2-D Tuple for pool
    pool_strides = (1,1) #:param pool_strides: Stride 2-D Tuple for pool

    conv_num_outputs = 9 #:param conv_num_outputs: Number of outputs for the convolutional layer   
    c00 = conv2d(x_pre, conv_num_outputs, conv_ksize, conv_strides)
    conv_num_outputs = 9 #:param conv_num_outputs: Number of outputs for the convolutional layer   
    c01 = conv2d(c00, conv_num_outputs, conv_ksize, conv_strides)
    p0  = pool(c01, pool_ksize, pool_strides)

    conv_num_outputs = 9 #:param conv_num_outputs: Number of outputs for the convolutional layer   
    c10 = conv2d(x_pre, conv_num_outputs, conv_ksize, conv_strides)
    conv_num_outputs = 9 #:param conv_num_outputs: Number of outputs for the convolutional layer   
    c11 = conv2d(c00, conv_num_outputs, conv_ksize, conv_strides)
    p1  = pool(c01, pool_ksize, pool_strides)

    tf_activation = p1

    # pick layers
    p1f = flatten(p1)
    p0f = flatten(p0)

    concat = tf.concat(1,[p1f,p0f])

    # fully connected layers
    num_outputs = n_classes*4
    x_tensor = fully_conn(concat, int(num_outputs))
    x_tensor = tf.nn.dropout(x_tensor, keep_prob)

    num_outputs = n_classes
    x_tensor = fully_conn(x_tensor, int(num_outputs))
    logits            = fully_con_net(x_tensor, keep_prob)

    return logits

def LeNet(x):    
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 0.1
    
    # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # SOLUTION: Activation.
    conv1 = tf.nn.relu(conv1)

    # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    
    # SOLUTION: Activation.
    conv2 = tf.nn.relu(conv2)

    # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    
    tf_activation = conv2

    # SOLUTION: Flatten. Input = 5x5x16. Output = 400.
    fc0   = flatten(conv2)
    
    # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b
    
    # SOLUTION: Activation.
    fc1    = tf.nn.relu(fc1)

    # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W  = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
    fc2_b  = tf.Variable(tf.zeros(84))
    fc2    = tf.matmul(fc1, fc2_W) + fc2_b
    
    # SOLUTION: Activation.
    fc2    = tf.nn.relu(fc2)

    # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
    fc3_b  = tf.Variable(tf.zeros(n_classes))
    logits = tf.matmul(fc2, fc3_W) + fc3_b
    
    return logits, tf_activation

def LeNet42(x):    
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 1/16
    
    x = tf.nn.local_response_normalization(x)
        
    # Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
    depth_1 = 6
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, depth_1), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(depth_1))
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # Activation.
    conv1 = tf.nn.relu(conv1)

    # Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Layer 2: Convolutional. Output = 10x10x16.
    depth_2 = 42
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, depth_1, depth_2), mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(depth_2))
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    
    # Activation.
    conv2 = tf.nn.relu(conv2)

    # Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    
    tf_activation = conv2

    # Flatten. Input = 5x5x16. Output = 400.
    fc0   = flatten(conv2)
    
    # Layer 3: Fully Connected.
    depth_in  = int(fc0.get_shape()[1])
    depth_out = int(depth_in * 120/400)
    fc1_W = tf.Variable(tf.truncated_normal(shape=(depth_in, depth_out), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(depth_out))
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b
    
    # Activation.
    fc1    = tf.nn.relu(fc1)

    # Layer 4: Fully Connected.
    depth_in  = int(fc1.get_shape()[1])
    depth_out = int(depth_in * 84/120)
    fc2_W  = tf.Variable(tf.truncated_normal(shape=(depth_in, depth_out), mean = mu, stddev = sigma))
    fc2_b  = tf.Variable(tf.zeros(depth_out))
    fc2    = tf.matmul(fc1, fc2_W) + fc2_b
    
    fc2 = tf.nn.dropout(fc2, keep_prob)
    # Activation.
    fc2    = tf.nn.relu(fc2)

    # Layer 5: Fully Connected.
    depth_in  = int(fc2.get_shape()[1])
    depth_out = n_classes
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(depth_in, depth_out), mean = mu, stddev = sigma))
    fc3_b  = tf.Variable(tf.zeros(depth_out))
    fc3    = tf.matmul(fc2, fc3_W) + fc3_b
    
    # Activation.
    #fc3    = tf.nn.relu(fc3)
    
    # Layer 6: Fully Connected.
    #depth_in  = int(fc3.get_shape()[1])
    #fc4_W  = tf.Variable(tf.truncated_normal(shape=(depth_in, n_classes), mean = mu, stddev = sigma))
    #fc4_b  = tf.Variable(tf.zeros(n_classes))
    #logits = tf.matmul(fc3, fc4_W) + fc4_b
    
    logits = fc3
    
    return logits, tf_activation


##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
image_shape = X_train.shape[1:]

x = tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]],name = 'x')
y = tf.placeholder(tf.float32, shape=[None, n_classes],name = 'y')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

# Model

x_pre = x#tf.nn.dropout(x, keep_prob) #:param x_tensor: TensorFlow Tensor
x_pre = tf.nn.local_response_normalization(x_pre)

logits, tf_activation = LeNet42(x_pre)

# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
    
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
#optimizer = tf.train.RMSPropOptimizer(0.001).minimize(cost)


# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.

In [13]:
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected, 
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.

def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    
    #x = neural_net_image_input((32, 32, 3))
    #y = neural_net_label_input(10)
    #keep_prob = neural_net_keep_prob_input()
    
    #print(label_batch)
    config=tf.ConfigProto(#allow_soft_placement=True,
                          #log_device_placement=True,
                         # device_count = {'GPU': 8}
         )
    sess = tf.Session(config=config)
    
    feed_dict={keep_prob:keep_probability,x:feature_batch,y:label_batch}
    
    #print(feed_dict)

    session.run(optimizer,feed_dict=feed_dict)
    
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """   
    feed_dict={keep_prob:1.,x:feature_batch,y:label_batch}
    
    # Should this be done on validation data?
    # Calculate batch loss and accuracy
    loss = session.run(cost,feed_dict=feed_dict)
    valid_acc = sess.run(accuracy,feed_dict=feed_dict)
    #print('Loss: {0:9.5f} Accuracy: {1:9.5f}'.format(loss,valid_acc))
    return loss, valid_acc
    
def get_next_batch(batches, batch_size = 64):
    n_batches = len(batches)//batch_size
    for idx in range(0,n_batches,batch_size):
        batch = batches[idx:idx+batch_size,:,:,:]
        yield [x,y]
       
# Parameters
epochs = 100  # 1000 @ 128 => 3430 sec / ~1h
batch_size = 1000
keep_probability = 0.25

saved_acc = .0
save_model_path = './'

saver = tf.train.Saver()

import time
start = time.time()
print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    try:
        saver.restore(sess, save_model_path)
        print('Restored Model.')
    except:
        sess.run(tf.global_variables_initializer())
        print('Initialised Model...')
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches 
        for idx in range(0,int(len(X_train)/batch_size)):
            n_batches = len(X_train)//batch_size
            batch_idx =  ((np.random.random(batch_size)*X_train.shape[0])).astype(int)
            batch_features = X_train[batch_idx,:,:,:]
            batch_labels   = y_train[batch_idx]
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            
            if idx % 5 == 0:
                loss, acc = print_stats(sess, batch_features, batch_labels, cost, accuracy)
                loss_valid, acc_valid = print_stats(sess, X_valid, y_valid, cost, accuracy)
                print('Epoch {:>4}, Batch {:>6}: Loss: {:9.5f} Accuracy: {:9.5f} Validation: Loss: {:9.5f} Accuracy: {:9.5f}'.format(
                    epoch + 1, idx,loss,acc,loss_valid, acc_valid))
                
            if acc_valid > saved_acc:
                try:
                    save_path = saver.save(sess, save_model_path)
                    saved_acc = acc_valid
                    print('Model saved')
                except:
                    print('Model saving failed')

        if acc_valid > 0.93:
           break
    
print('Training time : {}'.format(time.time() - start))
Training...
Restored Model.
Epoch    1, Batch      0: Loss:   0.05512 Accuracy:   0.98600 Validation: Loss:   0.25021 Accuracy:   0.92925
Model saved
Epoch    1, Batch      5: Loss:   0.06028 Accuracy:   0.98500 Validation: Loss:   0.24401 Accuracy:   0.92902
Epoch    1, Batch     10: Loss:   0.04645 Accuracy:   0.98600 Validation: Loss:   0.26445 Accuracy:   0.92608
Epoch    1, Batch     15: Loss:   0.05466 Accuracy:   0.98900 Validation: Loss:   0.24992 Accuracy:   0.93039
Model saved
Epoch    1, Batch     20: Loss:   0.04740 Accuracy:   0.99100 Validation: Loss:   0.25271 Accuracy:   0.92766
Epoch    1, Batch     25: Loss:   0.04957 Accuracy:   0.99000 Validation: Loss:   0.24258 Accuracy:   0.93107
Model saved
Epoch    1, Batch     30: Loss:   0.04485 Accuracy:   0.98900 Validation: Loss:   0.25933 Accuracy:   0.92698
Epoch    2, Batch      0: Loss:   0.05262 Accuracy:   0.99000 Validation: Loss:   0.24336 Accuracy:   0.93379
Model saved
Epoch    2, Batch      5: Loss:   0.03613 Accuracy:   0.99200 Validation: Loss:   0.28730 Accuracy:   0.92109
Epoch    2, Batch     10: Loss:   0.04914 Accuracy:   0.98800 Validation: Loss:   0.24847 Accuracy:   0.93152
Epoch    2, Batch     15: Loss:   0.05539 Accuracy:   0.99300 Validation: Loss:   0.24374 Accuracy:   0.93152
Epoch    2, Batch     20: Loss:   0.05682 Accuracy:   0.98900 Validation: Loss:   0.26506 Accuracy:   0.92585
Epoch    2, Batch     25: Loss:   0.05004 Accuracy:   0.99100 Validation: Loss:   0.24516 Accuracy:   0.93333
Epoch    2, Batch     30: Loss:   0.03547 Accuracy:   0.99400 Validation: Loss:   0.23938 Accuracy:   0.93243
Training time : 230.9254765510559

Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images

In [16]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
visualise_dataset(X_test[:,:,:,:],y_test)       
(12630, 32, 32, 1)
End of no passing, ID: 41
Wild animals crossing, ID: 31
Traffic signals, ID: 26
Speed limit (30km/h), ID: 1
Speed limit (30km/h), ID: 1
Roundabout mandatory, ID: 40
Vehicles over 3.5 metric tons prohibited, ID: 16
Speed limit (60km/h), ID: 3
Dangerous curve to the left, ID: 19
Speed limit (70km/h), ID: 4
Speed limit (70km/h), ID: 4
Right-of-way at the next intersection, ID: 11
Speed limit (20km/h), ID: 0
Road narrows on the right, ID: 24
No passing, ID: 9

Predict the Sign Type for Each Image

In [66]:
save_model_path = './'
n_samples = 12
top_n_predictions = 5

import random
import pandas as pd
from sklearn.preprocessing import LabelBinarizer

def _load_label_names():
    """
    Load the label names from file
    """
    df = pd.read_csv('./signnames.csv')
    return df['SignName'].values

def display_image_predictions(features, labels, predictions,n_samples,top_n_predictions):
    n_classes = 43
    label_names = _load_label_names()
    label_binarizer = LabelBinarizer()
    label_binarizer.fit(range(n_classes))
    label_ids = label_binarizer.inverse_transform(np.array(labels))

    fig, axies = plt.subplots(nrows=n_samples, ncols=2)
    #fig.tight_layout()
    #fig.suptitle('Softmax Predictions', fontsize=20, y=1.1)
    fig.suptitle('Softmax Predictions', fontsize=44)
    fig.set_size_inches(22, n_samples*2)
    
    n_predictions = top_n_predictions  ### Check this later, computer is busy
    margin = 0.05
    ind = np.arange(n_predictions)
    width = (1. - 2. * margin) / n_predictions

    for image_i, (feature, label_id, pred_indicies, pred_values) in enumerate(zip(
        features, label_ids, predictions.indices, predictions.values)):
        
        pred_names = [label_names[pred_i] for pred_i in pred_indicies]
        correct_name = label_names[label_id]
        
        axies[image_i][0].imshow(feature[:,:,0], interpolation="nearest", cmap="gray")
        axies[image_i][0].set_title(correct_name)
        axies[image_i][0].set_axis_off()

        axies[image_i][1].barh(ind + margin, pred_values[::-1], width)
        axies[image_i][1].set_yticks(ind + margin)
        axies[image_i][1].set_yticklabels(pred_names[::-1])
        axies[image_i][1].set_xticks([0, 0.5, 1.0])
        

def test_model(X_test,y_test,batch_size,random_sampels=True):
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = X_test, y_test

    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        #for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
        n_batches = len(X_test)//batch_size
        for idx in range(0,n_batches,batch_size):
            train_feature_batch = X_test[idx:idx+batch_size,:,:,:]
            train_label_batch   = y_test[idx:idx+batch_size]
            
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        if random_sampels :
            random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        else:
            random_test_features, random_test_labels = X_test,y_test
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        display_image_predictions(random_test_features, random_test_labels, random_test_predictions,n_samples,top_n_predictions)
    
    return random_test_predictions


_ = test_model(X_test,y_test,batch_size)
Testing Accuracy: 0.9409999847412109

Analyze Performance

In [67]:
### Calculate the accuracy for these 5 new images. 
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.

n_samples = 5

webimages = np.empty([n_samples,32,32,3])
webimages[0] = cv2.imread("webimages/1.jpg")
webimages[1] = cv2.imread("webimages/2.jpg")
webimages[2] = cv2.imread("webimages/3.jpg")
webimages[3] = cv2.imread("webimages/4.jpg")
webimages[4] = cv2.imread("webimages/5.jpg")

#visualise_dataset(webimages[:,:,:,:],y_test,1)      

np.sum(webimages,axis=3, keepdims=True)/3/255

random_test_predictions = test_model(webimages_gray,one_hot_encode([23,17,14,26,1]),batch_size=5,random_sampels=False)

#print(random_test_predictions.values)
Testing Accuracy: 0.6000000238418579

[[  9.77029383e-01   2.19183285e-02   8.18954257e-04   1.74098066e-04
    5.30776197e-05]
 [  9.96126950e-01   3.33242235e-03   4.47420840e-04   8.90796800e-05
    1.77997424e-06]
 [  7.02272236e-01   1.48365945e-01   9.19106230e-02   2.69952789e-02
    1.24656009e-02]
 [  4.47650850e-01   4.44780976e-01   8.56028497e-02   1.49302557e-02
    5.40654548e-03]
 [  7.97175407e-01   1.86552942e-01   5.91719337e-03   3.18665686e-03
    2.19294964e-03]]

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

In [22]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.

Step 4: Visualize the Neural Network's State with Test Images

This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.

Combined Image

Your output should look something like this (above)

In [26]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, ect if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it maybe having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess,feed_dict={x : image_input,keep_prob:1})
    print(activation.shape)
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 & activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min !=-1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
            
            
image =  X_train[-1:,:,:,:]
fig, ax = plt.subplots()
ax.imshow(image[0,:,:,0],interpolation="nearest", cmap="gray")
plt.show()


with tf.Session() as sess:
    # Initializing the variables
    try:
        saver.restore(sess, save_model_path)
        print('Restored Model...')
    except:
        sess.run(tf.global_variables_initializer())
        print('Initialised Model...')
    outputFeatureMap(image, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1)
Restored Model...
(1, 5, 5, 42)

Question 9

Discuss how you used the visual output of your trained network's feature maps to show that it had learned to look for interesting characteristics in traffic sign images

Answer:

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.

In [ ]:
##Writeup Template

Build a Traffic Sign Recognition Project

The goals / steps of this project are the following:

  • Load the data set (see below for links to the project data set)
  • Explore, summarize and visualize the data set
  • Design, train and test a model architecture
  • Use the model to make predictions on new images
  • Analyze the softmax probabilities of the new images
  • Summarize the results with a written report

Rubric Points

Data Set Summary & Exploration

1. Provide a basic summary of the data set and identify where in your code the summary was done. In the code, the analysis should be done using python, numpy and/or pandas methods rather than hardcoding results manually.

The code for this step is contained in the second code cell of the IPython notebook.

I used the pandas library to load the signnames

Number of training examples = 34799 Number of testing examples = 12630 Image data shape = (32, 32) Number of classes = 43

2. Include an exploratory visualization of the dataset and identify where the code is in your code file.

I just plottet the images, ids and sign names. I know this signs very well so i could also verify that the labels are correct.

Design and Test a Model Architecture

  • I added a HLS colorspace to the data.
  • Did max-scaling
  • I used 3 Layers conv2d_maxpool and 3 fully connected Layers
  • The model did learn very fast but stuck early

  • then i tried a VGG like arhitecture als so with not good enough results

  • after that i modifiyed the LeNet Architecture

  • going deeper resultet in no learining at all
  • going wirder did work

  • after still not getting good enough results i added normalisiotion to the model input and dropout in the dense layers and retrained the model with succsess

2. Training, validation and testing data

i did go with the existing data splits.

There was a lot of data, it seems it comes from videosequences moving mostly towards the signs.

I jused random batches to get rid of the sorting/sequencing.

Because of the bigger model i reduced the std_dev for the inits to 1/16.

3. Describe, and identify where in your code, what your final model architecture looks like including model type, layers, layer sizes, connectivity, etc.) Consider including a diagram and/or table describing the final model.

The code for my final model is located in the seventh cell of the ipython notebook.

x = tf.nn.local_response_normalization(x)


My final model consisted of the following layers:

Layer Description
Input 32x32x1 Gray image
Normalisation i used local_response_normalization
Convolution 3x3 1x1 stride, valid padding, outputs 28x28x6
RELU
Max pooling 2x2 stride, outputs 14x14x6
Convolution 3x3 1x1 stride, valid padding, outputs 14x14x42.
RELU
Max pooling 2x2 stride, outputs 7x7x42
Fully connected x3 Layers i calculated the width depending on the input width.
Softmax

4. Describe how, and identify where in your code, you trained your model. The discussion can include the type of optimizer, the batch size, number of epochs and any hyperparameters such as learning rate.

My Hyperparameters gets tuned all the time i end up with: epochs = 100
batch_size = 1000 keep_probability = 0.25

the epochs loop exits when the needed accuracy was hit

I tried the RMSPropOptimizer but end up with AdamOptimizer

5. Describe the approach taken for finding a solution. Include in the discussion the results on the training, validation and test sets and where in the code these were calculated. Your approach may have been an iterative process, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think the architecture is suitable for the current problem.

The code for calculating the accuracy of the model is located in the ninth cell of the Ipython notebook.

My final model results were:

  • Training Accuracy: 0.99400
  • Validation Accuracy: 0.93243
  • Testing Accuracy: 0.9409999847412109

I tryed a lot of diffrent approches Some didn train at all, others did not reach enough accuracy Best results i get by adopting the width of the layers my maintaining the proportions of LeNet To avoid overfitting i retrained with normalisation at the inputlayer and increasing dropout

Test a Model on New Images

1. Choose five German traffic signs found on the web and provide them in the report. For each image, discuss what quality or qualities might be difficult to classify.

Here are five German traffic signs that I found on the web:

alt text

All pictures had some difficultys 1: just signs in the background 2: overpainted 3: other text 4: small, other signs on picture, easy to confuse in grayscale 5: rotated, distorted, broken, on ground, dirty

2. Discuss the model's predictions on these new traffic signs and compare the results to predicting on the test set. Identify where in your code predictions were made. At a minimum, discuss what the predictions were, the accuracy on these new predictions, and compare the accuracy to the accuracy on the test set (OPTIONAL: Discuss the results in more detail as described in the "Stand Out Suggestions" part of the rubric).

The code for making predictions on my final model is located in the tenth cell of the Ipython notebook.

Here are the results of the prediction:

Image Prediction
slippery road slippery road
no entry no entry
stop stop
traffic signals general caution
speedlimit 30 speedlimit 50

The model was able to correctly guess 2 of the 5 traffic signs, which gives an accuracy of 60%. This compares favorably to the accuracy on the test set of ...

3. Describe how certain the model is when predicting on each of the five new images by looking at the softmax probabilities for each prediction and identify where in your code softmax probabilities were outputted. Provide the top 5 softmax probabilities for each image along with the sign type of each probability. (OPTIONAL: as described in the "Stand Out Suggestions" part of the rubric, visualizations can also be provided such as bar charts)

The code for making predictions on my final model is located in the 11th cell of the Ipython notebook.

For the first image, the model is relatively sure that this is a stop sign (probability of 0.6), and the image does contain a stop sign. The top five soft max probabilities were

Probability Prediction
0.977029383 slippery road
0.996126950 no entry
0.702272236 stop
0.447650850 general caution
0.797175407 speedlimit 50

compleate np array:

[[ 9.77029383e-01 2.19183285e-02 8.18954257e-04 1.74098066e-04 5.30776197e-05] [ 9.96126950e-01 3.33242235e-03 4.47420840e-04 8.90796800e-05 1.77997424e-06] [ 7.02272236e-01 1.48365945e-01 9.19106230e-02 2.69952789e-02 1.24656009e-02] [ 4.47650850e-01 4.44780976e-01 8.56028497e-02 1.49302557e-02 5.40654548e-03] [ 7.97175407e-01 1.86552942e-01 5.91719337e-03 3.18665686e-03 2.19294964e-03]]

In [ ]: